Increased Productivity Through ModelTime™ Behaviors

 

Paul Mlyniec and Dan Mapes
MultiGen Inc.
San Jose, California 95128


ABSTRACT

Recent attention has been focused on the assignment of behaviors to graphical entities. Most of these behaviors are targeted for the runtime environment. By defining ModelTime behaviors, behaviors whose sole purpose is to help with the task of modeling and whose scope is limited to the modeling environment itself, great gains in productivity can be realized in rapid assembly tasks.

Keywords: ModelTime behaviors, realtime, runtime, assembly, modeling, modeler (the program), Modeler (the human), smart face

1. INTRODUCTION

1.1. The Goals

In the world of visual simulation modeling, the goal is to build scenes that appear rich in visual content and yet render in realtime. A realtime model can only be as complex as the target rendering hardware will allow. Current realtime modeling tools support such techniques as level-of-detail switching and separating-plane trees to increase the perceived complexity of a scene while remaining within the polygon budget of the target hardware.

As the visual simulation market broadens, it is also important to put the tools in the hands of a new, less technical user base. Technical considerations should not dominate the act of creating a visual scene. Decoupling the act of creation from the technical implementation of a visual system should be a primary focus for the visual simulation industry.

1.2. The Problem

Unfortunately, simulation modeling is a slow, tedious task, often requiring months of concentrated effort to produce a single scene. As realtime rendering hardware speeds up, the allowable scene complexity also increases, expanding the demands on the database modeling package (modeler). The tools used to build simulation models, although powerful and convenient compared to their predecessors, still require great attention to detail. Significant ramp-up time is required to use many of the existing tools effectively. Knowledge of the target hardware is required if the Modeler is to produce efficient models. These requirements of time, training, and knowledge exclude everyone but the professional Modeler from producing scenes for realtime visual simulation. Clearly, better tools are needed to allow both the casual and the professional Modeler to create effective scenes more easily.

1.3. The Current Approach

Typically, the creation of a scene for visual simulation consists of creating the elements in that scene from the ground up. Occasional reuse of scene elements such as a vehicle or building can improve productivity when these elements are available, but even when they are, their overuse can appear obviously repetitive.

Most visual simulation modeling packages treat a visual scene as a collection of parts made up of polygons. These parts and polygons are typically passive visual elements that must be assembled manually. No knowledge of how these scene elements fit together, aside from their geometry, is available to the tools doing the fitting. The onus is on the Modeler to interpret how the geometry can be used with the tools at hand to fit the parts together. Adding new features to a modeling package allows for new ways to manipulate and combine scene elements, but it also clutters the interface, making it harder to locate the appropriate tool for a given task.

2. A SOLUTION

To allow a non-professional to quickly produce efficient scenes requires a fresh approach to the process of scene creation. The solution presented below consists of a simple combination of techniques that, taken together, address this goal. These techniques include switching the scene creation paradigm away from modeling and towards assembly, adding behaviors to the scene elements that aid in the assembly task, and adopting strategies for increasing apparent complexity and variety from a limited number of scene elements. The approach is pragmatic, attacking the problem on multiple fronts.

2.1. Assemble Rather Than Model

Creating the individual parts within a simulation scene, such as a building or chair, is slow, requiring the creation of the geometry and texture patterns that describe that object. Placing a prefabricated object in a scene is relatively quick and easy. Starting from identifiable elements such as the building or chair and simply placing these elements in the scene (assembly), rather than creating scene elements from the ground up polygon by polygon (modeling), can yield considerable gains in productivity. Although this is perhaps an obvious proposition, it implies the availability of a comprehensive model library so that an appropriate model is available most of the time. This can be accomplished with model libraries based on market area such as exterior city, exterior country, interior residential, and so on. Other techniques for obtaining sufficient variety from a relatively limited library of models are discussed below.

Relying on model libraries addresses another of the concerns identified earlier: The need to keep the target hardware and general simulation issues in mind. Models created by experienced Modelers can be positioned by novices and still retain the efficiencies modeled in by the experts.

2.2. Move the Tools Into the Models

Treating models as passive elements that are manipulated by a set of tools is certainly a workable approach to positioning models in a scene. When generic tools are provided that apply equally to all scene elements, great flexibility is gained, but at the expense of chaining multiple operations to accomplish what seems to the Modeler to be a simple operation. An example of this is the task of placing the second story of a building on top of its first story. Typically this would consist of matching corner vertices and edges and stretching the second story into place. Using the generic tools, this task can be accomplished, but only after numerous steps. Adding a tool to the modeling package to accomplish this task, while useful in this situation and related instances, clutters the interface, and once a number of these 'common' situations are addressed, it becomes difficult to identify the appropriate tool for a given task.

An alternate approach is to migrate the modeling tools into the models themselves. By removing the emphasis from the tools provided by the modeling package and instead endowing the models themselves with tool behaviors, it is possible to both increase productivity and simplify the task of modeling. With a few visual cues, it can be made obvious how a model will behave in the modeling environment. The second story above could indicate graphically what sorts of entities it is capable of docking to and could snap into place once within range of a compatible first story. Letting the models absorb their appropriate tool behaviors adds the power of object-specific high-level tools without cluttering the interface.

An important aspect of injecting modeling tools into the models themselves is that changing the behavior of an object within the modeling environment is itself an act of modeling rather than of programming. In this way adding tools to an environment that supports ModelTime behaviors can be accomplished by a non-programmer.

Abstract assembly operations take on concrete meaning when the tools are tied to graphical entities in a scene. Nobody has to remind the Modeler what to do with a wheel when a car is present in the scene, so the user can leverage real-world expectations in the modeling environment. Ideally, the tools become obvious and easy to use.

2.3. Appropriate Use of Immersion

The approach described here of combining an assembly paradigm with ModelTime behaviors is well suited to immersion. While precision freehand creation and positioning tasks are difficult in an immersive setting due to problems with calibration, update rate, lag, and display resolution, rough positioning of objects is natural when immersed. By letting ModelTime behaviors bridge the gap between rough positioning and precision alignment, significant productivity is attained. See [Mapes95] for a broader discussion of what sorts of tasks are appropriate when immersed.

The usefulness of rough positioning coupled with ModelTime behaviors is in no way limited to immersive interfaces. Rough positioning with the mouse coupled with ModelTime behaviors speeds up the task of scene assembly as well.

2.4. Behaviors

Much attention has been focused in recent years on adding behaviors to graphical entities in simulation scenes. Examples include endowing objects with physical attributes such as mass and moment of inertia, articulation, and autonomous behaviors such as predator and prey behaviors. Another important emphasis has been the migration of the description of these behaviors into the models themselves and away from the program that implements the behavior. Behaviors then become an attribute of the model, similar to visual attributes such as texture and color. This migration of behavior into the model allows non-programmers to modify the behavior of scene elements, carrying behavior specification to a broader user base.

2.4.1. Runtime Behaviors

Generally speaking, most of the simulation behaviors to date have been targeted to the runtime environment. That is, the behaviors manifest themselves during the target simulation on the target platform and are intended for the person experiencing the simulation. Examples of such behaviors include raising and lowering of landing gear during a flight training session, encounters with an aggressive driver (an autonomous agent) during a driving simulation, swinging a door about its hinges, or bouncing a ball.

2.4.2. ModelTime Behaviors

In contrast to behaviors targeted for the runtime environment are behaviors referred to here as ModelTime behaviors whose sole purpose is to aid in the task of modeling. It is only the intent or purpose of a given behavior that classes it as runtime versus ModelTime. Any runtime behavior is a candidate for a ModelTime behavior provided it is being used to aid with scene creation rather than scene experience. Models can contain both runtime and ModelTime behaviors, but each is generally honored only in its own target environment. That is, runtime behaviors are exercised in the runtime and ModelTime behaviors are used while creating a scene. However, it is not uncommon to audition runtime behaviors in the modeling environment.

ModelTime behaviors are part of the models themselves. Embedding behaviors in the models has many advantages. A model can communicate its intended behavior to the modeling environment without the environment having to rely on general assumptions about its geometry. Consider specifying a planting action for a building. A number of questions must be answered at ModelTime to exercise this behavior when the building is placed in the scene: Plant on what? On the ground? Or am I looking for another story of another building to plant on? Alternately, could I be a wing of another building, so that my planting direction is not down? These questions can be answered by the model itself by providing a language designed to answer all these questions and more. Tools can be provided to allow specifying these behaviors in a graphical manner. This in a sense allows a Modeler to model modeling tools rather than being forced to enlist a programmer to create the tools programmatically. If the tools allow combining or sequencing ModelTime behaviors, new higher-level behaviors can be specified by the Modeler, and custom behaviors can be created from existing behaviors by altering the parameters the tools use, such as snap range.
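
To make the idea concrete, the sketch below shows one way a planting behavior might be stored with a model as declarative data rather than program code. This is a minimal illustration only; the names (PlantBehavior, Model, find_plant_target) and fields are hypothetical and do not describe any actual MultiGen format or API.

    # A minimal sketch of a declarative planting behavior stored with a model.
    # All names and fields here are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class PlantBehavior:
        """Answers the ModelTime questions: plant on what, and in which direction?"""
        compatible_types: set                        # e.g. {"terrain"} or {"office_core"}
        plant_direction: tuple = (0.0, 0.0, -1.0)    # default: plant straight down
        snap_range: float = 2.0                      # how far the behavior will search

    @dataclass
    class Model:
        name: str
        type_tag: str
        position: tuple = (0.0, 0.0, 0.0)
        modeltime_behaviors: list = field(default_factory=list)

    def find_plant_target(model, scene):
        """Return the first scene element this model is willing to plant on."""
        for behavior in model.modeltime_behaviors:
            if not isinstance(behavior, PlantBehavior):
                continue
            for other in scene:
                if other is not model and other.type_tag in behavior.compatible_types:
                    return other, behavior
        return None, None

    # A building wing that plants sideways onto a compatible core building,
    # rather than straight down onto the terrain.
    wing = Model("east_wing", "building_wing",
                 modeltime_behaviors=[PlantBehavior({"office_core"},
                                                    plant_direction=(1.0, 0.0, 0.0))])
    scene = [Model("ground", "terrain"), Model("tower", "office_core")]
    print(find_plant_target(wing, scene))            # finds the tower, not the terrain

Because the behavior is ordinary model data, altering it is an act of modeling rather than programming.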

Numerous behaviors are identified and described below as potential candidates for productivity gains at ModelTime. This is just a sampling and is not intended as an exhaustive list.

2.4.2.1. Snapping and Magnetism

A useful and common operation in rapid assembly tasks is the precision alignment of objects with respect to one another. Snapping of vertices to other vertices in the scene can be accomplished with no knowledge of the high-level relationships between objects in the scene. Edge-to-edge alignment and face-to-face alignment are also useful. But snap alignment can require numerous steps to obtain the desired alignment. In addition, in a scene cluttered with geometry, snapping to the intended vertex can be a frustrating task, fraught with unintended vertex snaps. A multi-step, selective alignment operation is desirable.

By introducing higher-level smart snapping relationships through ModelTime behaviors stored with an object, unnecessary steps can be avoided and ambiguous intent can be resolved. For example, a smart face can be introduced into a model that at ModelTime™ looks only for other faces of compatible type. In this way the second story of a building can seek out its first floor, ignoring the terrain, houses, and even other multi-story buildings of incompatible type. Furthermore, once an appropriate target is detected, a multi-step behavior can be set into motion which amounts to the multiple steps required to fully position and orient the parts with respect to one another. As mentioned in a previous section, it is just this type of behavior that can bridge the gap between rough alignment and precision alignment in an immersive setting. The feel is that of a collection of magnetic objects which attract and repel in an intuitive fashion. Attractive forces can be specified independently for position and orientation snaps.
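
As a rough illustration, the sketch below tags each face with a type and the set of types it will dock to, then performs the snap as two separate steps with independently specified ranges for position and orientation. The SmartFace structure and the snap logic are assumptions made for illustration, not the behavior of any particular modeling package.

    # A minimal sketch of smart-face snapping with independent position and
    # orientation attraction; all names here are illustrative assumptions.
    import math
    from dataclasses import dataclass

    @dataclass
    class SmartFace:
        face_type: str          # e.g. "story_top" or "story_bottom"
        accepts: set            # face types this face will dock to
        center: tuple           # world-space position of the face center
        normal: tuple           # outward face normal (unit length)

    def try_snap(moving, candidates, position_range=1.5, orient_range=0.8):
        """Multi-step snap: find a compatible face in range, align position,
        then orientation; the two attractive ranges are independent."""
        for target in candidates:
            if target.face_type not in moving.accepts:
                continue                              # incompatible geometry is ignored
            if math.dist(moving.center, target.center) > position_range:
                continue                              # too far away to attract
            moving.center = target.center             # step 1: position snap
            facing = -sum(m * t for m, t in zip(moving.normal, target.normal))
            if facing > orient_range:                 # step 2: orientation snap once the
                moving.normal = tuple(-n for n in target.normal)  # faces roughly oppose
            return target
        return None

    second_story_bottom = SmartFace("story_bottom", {"story_top"},
                                    center=(0.2, 0.1, 3.4), normal=(0.0, 0.0, -1.0))
    first_story_top = SmartFace("story_top", set(),
                                center=(0.0, 0.0, 3.0), normal=(0.0, 0.0, 1.0))
    terrain = SmartFace("terrain", set(),
                        center=(0.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0))
    print(try_snap(second_story_bottom, [terrain, first_story_top]))  # snaps to the first story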

Scaling and stretching behaviors can also be triggered by proximity of compatible objects at ModelTime. For example, consider positioning a generic window in an opening of non-standard dimensions. A desirable ModelTime behavior might be for the window to scale into the opening. Two smart faces would be modeled into the objects, one in the opening, and one on the window.

2.4.2.2. Stretching and Threshold Behaviors

When an object is stretched, it may not be enough to uniformly grow it in the direction in which it is stretching. For example, simply growing the first floor of a skyscraper when it is stretched would result in very tall windows. With a stretch threshold behavior, a behavior invoked each time a certain stretch condition is met, stretching the building could instead have the effect of adding stories to it by repeating geometry or texture appropriately.
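
A minimal sketch of one such threshold behavior follows, assuming a hypothetical Building with a fixed story height; each full story of stretch repeats geometry, and only the remainder is left to ordinary scaling.

    # A minimal sketch of a stretch threshold behavior; the Building class and
    # its fixed story height are illustrative assumptions.
    class Building:
        def __init__(self, story_height=3.0, stories=1):
            self.story_height = story_height
            self.stories = stories

        def height(self):
            return self.story_height * self.stories

        def stretch_to(self, new_height):
            """Invoke the threshold behavior once per full story of stretch;
            the remainder is absorbed by ordinary scaling."""
            self.stories = max(1, int(new_height // self.story_height))  # repeat geometry
            return new_height - self.height()                            # leftover to scale

    tower = Building()
    leftover = tower.stretch_to(14.0)
    print(tower.stories, leftover)   # 4 stories by repetition, 2.0 units left to scale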

2.4.2.3. Physical Attributes

Assigning physical attributes to objects for exercise in the runtime is commonplace. Certain physical properties can be useful at ModelTime as well. The physical extent of an object can be used to support collision modeling, that is, preventing certain objects from penetrating one another in the modeling environment. For example, it is convenient to be able to push a model of a bookcase into the corner of a modeled room, using the room itself as the alignment tool. Which surfaces in an object to consider in collision modeling (the bounding box of the bookcase) and which objects in the scene to consider collidable (the room) can be stored in the model itself.
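
The sketch below illustrates the idea, assuming the bookcase declares its axis-aligned bounding box as the only geometry to test and the room's interior walls as the only collidable surfaces; the helper and its arguments are hypothetical.

    # A minimal sketch of ModelTime collision: push the bookcase by 'move' and
    # stop its bounding box at the room walls. All names are illustrative.
    def clamp_to_room(bookcase_min, bookcase_max, room_min, room_max, move):
        new_min = [bookcase_min[i] + move[i] for i in range(3)]
        new_max = [bookcase_max[i] + move[i] for i in range(3)]
        for i in range(3):
            if new_min[i] < room_min[i]:              # hit a wall on the low side
                shift = room_min[i] - new_min[i]
                new_min[i] += shift
                new_max[i] += shift
            if new_max[i] > room_max[i]:              # hit a wall on the high side
                shift = new_max[i] - room_max[i]
                new_min[i] -= shift
                new_max[i] -= shift
        return new_min, new_max

    # Shove the bookcase hard toward the corner; it stops flush against two walls.
    print(clamp_to_room([0, 0, 0], [1, 0.4, 2], [0, 0, 0], [5, 4, 2.5], [10, 10, 0]))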

Gravity can be used to manipulate objects in a scene if the objects are told to act under that force during a modeling session. Endowing objects with mass, moment of inertia, flexibility, and so on can further enable the Modeler to treat the assembly of a synthetic scene as the everyday manipulation of familiar elements. Allowing the Modeler to leverage real-world expectations adds to the sense of familiarity and directness of the modeling experience, but the goal of ModelTime physical behaviors is not necessarily to simulate the real world accurately, only to feel predictable and consistent. So an object might have one set of physical attributes intended for the runtime that fully describe the object for accurate physical simulation, and another set of physical attributes geared for ModelTime that allow the object to be manipulated in a useful, but less accurate, way.
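
One way a single model might carry both attribute sets is sketched below; the field names and values are illustrative assumptions rather than an existing file format.

    # A minimal sketch of a model carrying accurate runtime physics alongside a
    # simplified, predictable ModelTime set; all fields are illustrative.
    runtime_physics = {
        "mass": 42.7,                                   # kg, for accurate dynamics
        "inertia_tensor": [[3.1, 0, 0], [0, 2.8, 0], [0, 0, 1.2]],
        "friction": 0.62,
    }
    modeltime_physics = {
        "collidable": True,                             # stop at walls while positioning
        "gravity": True,                                # settle onto the floor when released
        "mass": 1.0,                                    # uniform 'feel' regardless of true mass
    }
    bookcase = {"geometry": "bookcase_geometry",
                "runtime": runtime_physics,
                "modeltime": modeltime_physics}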

2.4.2.4. The Selection Hierarchy

Numerous modeling packages maintain a hierarchy of the elements contained in a scene. For example, a car is treated not as just a collection of polygons, but rather as a body, four wheels, lug nuts, and so on. The polygons representing one wheel are then grouped under the node in the hierarchy that represents that wheel.

At times the modeling hierarchy is at odds with how the user views an object. In the example of the car, the tire and lug nuts are both subassemblies of the wheel. What if only the lug nuts can be manipulated? Modeling hierarchies can be further refined to reinforce the user's expectation as to what elements are selectable and therefore manipulable. The model itself can contain the information necessary to maintain this selection hierarchy. The wheel would simply list the lug nuts and not the tire as selectable.
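
A minimal sketch of how a model might declare its own selection hierarchy follows; the node structure and the selectable list are hypothetical.

    # A minimal sketch of a selection hierarchy stored in the model itself;
    # only children listed as selectable can be picked by the Modeler.
    class Node:
        def __init__(self, name, children=(), selectable=None):
            self.name = name
            self.children = list(children)
            self.selectable = selectable        # None means every child is pickable

        def pickable_children(self):
            if self.selectable is None:
                return self.children
            return [c for c in self.children if c.name in self.selectable]

    tire = Node("tire")
    lug_nuts = [Node("lug_nut_%d" % i) for i in range(5)]
    wheel = Node("wheel", [tire] + lug_nuts,
                 selectable=[n.name for n in lug_nuts])   # the tire is not listed
    print([c.name for c in wheel.pickable_children()])    # only the lug nuts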

2.4.2.5. Smart Palettes

The assembly paradigm for rapid prototyping sketched out in this paper has an inherent drawback -- even with an extensive model library, it is difficult to create scenes that do not look repetitive. For example, in a scene targeted for driving simulation, if every house and tree is exactly the same, the subjective impression of the scene will be that of a shallow reality and much of the training benefit will be lost. The challenge for this approach is to allow for variety within the constraints of a model library.

Most modeling packages allow the user to modify visual traits of objects such as color, material, and texture. Typically, the user is free to assign any available visual attribute to any object in the scene, even when inappropriate, such as a car made of wood or a fish made of glass. By building smart palettes into models, attribute palettes designed specifically for the object, variety can be attained through multiple choice. A common example of this approach is the paint color selection program found in many paint stores. These programs allow customers to select a general house style similar to their own and then select paint colors for the logical parts of the house such as the window trim, exterior walls, roof, Tudor strips, and so on.

By building smart palettes into models and providing tools for selecting among the possible attributes for logical parts in a model, variety can be supported in a limited set of models. An example is assembling a neighborhood of tract houses from a few 'model' houses. Each house can be customized by the use of smart palettes to create a scene that is compellingly diverse.
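
The sketch below shows one possible representation of a smart palette, where each logical part of a tract house offers only the choices appropriate to it; the palette contents and helper names are illustrative.

    # A minimal sketch of a smart palette: variety through multiple choice,
    # with every choice guaranteed to be appropriate for its logical part.
    import random

    tract_house_palette = {
        "exterior_walls": ["pale_yellow", "sage_green", "dove_gray"],
        "window_trim":    ["white", "charcoal"],
        "roof":           ["asphalt_gray", "terracotta_tile"],
    }

    def customize(palette, seed=None):
        """Pick one legal choice per logical part to produce a house variant."""
        rng = random.Random(seed)
        return {part: rng.choice(choices) for part, choices in palette.items()}

    # A street of visibly different houses assembled from one library model.
    for i in range(6):
        print(customize(tract_house_palette, seed=i))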

3. CONCLUSION

The approach sketched out in this paper is intended to address a specific goal -- increased productivity in scene creation with a minimum of training. It is therefore limited in scope and pragmatic in nature. While it may seem at times a collection of disconnected techniques, it is believed that these individual techniques combine into something greater than the sum of their parts.

The ModelTime behaviors discussed here are by no means a comprehensive list; just a representative sampling intended to give the flavor of this approach. Numerous candidates such as planting behaviors are not discussed here. The list is expected to grow.

This approach does not apply in all situations. It is well suited to producing generic environments but can only approximate specific scenarios. Therefore a modeling package will always be a necessary component of site-specific scene creation and will serve as a companion product to any assembly package, not only for adding new components to the library and "filling the cracks" in a scene created by the assembler, but also to endow new models with ModelTime behaviors.

4. REFERENCES

1. [Mapes95] D. P. Mapes and P. K. Mlyniec, "3D Object Manipulation Techniques, Immersive Versus Non-Immersive Interfaces" SPIE Photonics West Proceedings, February 1995.


MultiGen and ModelTime and ModelTime Behaviors are registered trademarks and GameGen and SmartScene are trademarks of MultiGen, Inc. All other trademarks mentioned herein are property of their respective companies.

Copyright 1997 MultiGen Inc.

Preprint from The Engineering Reality of Virtual Reality, M.T. Bolas and S.S. Fisher, eds. Proc. SPIE 2409, 1995.